Create, stream and view 3D content
Patent abstract:
A method is disclosed for allowing a custom version of a video session to be created for presentation on at least one viewing device. Game metadata generated from a live or recorded video feed is received. The game metadata includes three-dimensional modeling data associated with the live or recorded video feed. Viewer metadata collected from a plurality of viewer devices is received. The viewer metadata includes information regarding a plurality of responses from a plurality of viewers to a presentation of the video session on a plurality of viewing devices. Additional game metadata is created based on the game metadata and the viewer metadata. The additional game metadata includes camera data based on the three-dimensional modeling data. The additional game metadata is integrated into the game metadata for at least near real-time presentation of the custom version of the video session.

Publication number: BR112019011452A2
Application number: R112019011452
Filing date: 2017-12-09
Publication date: 2019-10-15
Inventor: Myhill Adam
Applicant: Unity IPR ApS
IPC main class:
Patent description:
CREATE, TRANSMIT AND VIEW 3D CONTENT

REFERENCE TO RELATED PATENT APPLICATIONS

[001] This patent application claims the benefit of US Provisional Patent Application Serial Number 62/432,321, filed on December 9, 2016, and US Provisional Patent Application Serial Number 62/551,130, filed on August 28, 2017, each of which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

[002] Embodiments of the present invention relate generally to multiplayer computer games and, more specifically, to systems and methods for creating, transmitting and viewing 3D content.

BACKGROUND

[003] There are a number of tools that allow video game players to play multiplayer games online in real time, through which multiple video game players distributed over a network interact with the same video game at the same time. In addition, there are tools for non-playing users to watch the game and transmit their views of the game along with commentary. These non-playing users are referred to as hosts or controllers, and their transmission is referred to as a broadcast of the game. Many third-party users can tune in and watch selected games through Internet sites such as Twitch® and YouTube®.

[004] Unfortunately for third-party viewers, the cinematic quality of the broadcast is often very poor with respect to camera stability, shot framing, camera cuts and more. Controllers often lack training in cinematography and have very limited tools at their disposal to capture the action of games. The most popular tool is a camera controlled by a computer mouse, but this tool often produces choppy views. Most often, the controller has no view control and must create the broadcast using a player's camera (for example, the camera view used by the player at any given time), which is very difficult to watch. Spectators watching the action directly from the game (for example, without a controller) have no choice other than one of the players' cameras. In addition, another drawback of traditional game broadcasts is that the output is a standard video stream viewable on video devices, with no ability to control the cinematography or provide significant feedback from viewers other than views, likes and comments.

BRIEF DESCRIPTION OF THE DRAWINGS

[005] The accompanying drawings merely illustrate exemplary embodiments of the present disclosure and should not be considered to limit its scope. Additional features and advantages of the present disclosure will become evident from the following detailed description, considered in combination with the accompanying drawings, in which:

Figure 1 is a component diagram of an exemplary eSport system that includes an eSport device and associated peripherals, according to an embodiment;

Figure 2A illustrates the eSport system in an exemplary network through which a multiplayer computer game is provided online (for example, an eSport game), according to an embodiment;

Figure 2B illustrates the eSport system in an exemplary network through which a transmission of a real event is provided, according to an embodiment;

Figures 3A and 3B illustrate an exemplary method for creating high definition broadcast content from eSport games for distribution and presentation to viewers via video sharing sites, according to an embodiment;

Figures 3C and 3D illustrate an exemplary method for displaying and controlling cinematography for high definition broadcast content for eSport events, according to an embodiment;

Figure 4 is a block diagram that illustrates an exemplary software architecture, which can be used in conjunction with the various hardware architectures described in this document, according to an embodiment; and

Figure 5 is a block diagram illustrating components of a machine, according to some exemplary embodiments, capable of reading instructions from a machine-readable medium (for example, a machine-readable storage medium) and executing any one or more of the methodologies discussed in this document, according to an embodiment.

[006] The headings provided in this document are for convenience only and do not necessarily affect the scope or meaning of the terms used. Similar numbers in the figures indicate similar components.

DETAILED DESCRIPTION

[007] The following description includes systems, methods, techniques, instruction sequences and computing machine program products that embody illustrative embodiments of the disclosure. In the following description, for the purpose of explanation, numerous specific details are presented in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter can be practiced without these specific details. In general, well-known instruction instances, protocols, structures and techniques are not necessarily shown in detail.

[008] The systems and methods described in this document provide a means to create high quality cinematic video output from a real-time (or at least near real-time) gaming environment, where the video output contains metadata that can be used by the viewer to enrich the viewing experience. In addition, the systems and methods provided in this document describe a player component configured to display game broadcasts to viewers, provide them with tools to allow them to interact with the metadata, and collect data from viewers to provide feedback to the game and the game controller in real time.

[009] The systems and methods described in this document allow a controller to create real-time game broadcasts from the gaming environment that leverage audio, video and game metadata to provide high cinematic quality and enrich the viewing experience. In addition, the systems and methods provided in this document describe a display module configured to display game broadcasts to viewers, provide viewers with tools that allow them to interact with the metadata, and collect data from viewers to provide feedback to the game and the game controller in real time. According to another embodiment, the display module is configured to display a previously recorded game or a previously recorded game broadcast, provide viewers with tools that allow them to interact with the metadata, and collect data from viewers to provide feedback to the game. According to another embodiment, the display module is configured to display a 3D environment (for example, from 3D environment data), provide viewers with tools that allow them to interact with the metadata and the 3D environment, and collect data from viewers to provide feedback to the generator of the 3D environment data. The 3D environment data can include live 3D model data, a live 3D rendering (for example, live 3D photogrammetry) of a real environment, previously saved 3D model data, and a previously saved 3D rendering (for example, recorded 3D photogrammetry) of a real environment. Throughout the description in this document, the term eSport generally refers to 3D content, including data for networked multiplayer games generated in real time whereby a plurality of video game players distributed over a network interact with the same video game at the same time (for example, amateur online games and competitive professional online games), and the term includes 3D environment data.

[0010] Figure 1 is a component diagram of an exemplary eSport device 102 and associated peripherals. In the exemplary embodiment, the eSport device 102 is a computing device operated by a user 110. The user 110 can be a player in an online multiplayer game (for example, an eSport game), or a broadcast controller (or simply controller) that provides various broadcasting functions associated with the eSport game, or a third-party viewer of the broadcast. The eSport device 102 includes one or more display devices 104 (for example, conventional computer monitors, wearable VR devices, etc.) and one or more input devices 106 (for example, keyboard, mouse, handheld or wearable pointing devices, camera device, motion tracking device, etc.). The eSport device 102 also includes memory 120, one or more central processing units (CPUs) 122, one or more graphics processing units (GPUs) 124, and one or more network adapters 126 (for example, wired or wireless network adapters providing the network connectivity used for the eSport game).

[0011] In the exemplary embodiment, the eSport device 102 includes a game engine 130 (for example, executed by the CPU 122 or GPU 124) that presents the eSport game to the user 110. The game engine 130 includes an eSport module 140 that provides various broadcasting features for the eSport game as described in this document. The eSport module 140 includes a live game module 142, a transmission module 144, a display module 146 and a cinematographic module 148, each implemented within, or otherwise communicating with, the game engine 130. Each of the live game module 142, the transmission module 144, the display module 146 and the cinematographic module 148, as well as the game engine 130, includes computer-executable instructions resident in the memory 120 that are executed by the CPU 122 or GPU 124 during operation. The game engine 130 communicates with the display devices 104 and also with other hardware such as the input device(s) 106. The live game module 142, the transmission module 144, the display module 146 and the cinematographic module 148, or the entire eSport module 140, can be integrated directly within the game engine 130, or can be implemented as an external piece of software (for example, a plug-in or other independent software). In addition, although the live game module 142, the transmission module 144, the display module 146 and the cinematographic module 148 are shown as separate modules in Figure 1, in practice these modules can be combined in any way and may not be implemented as discrete code modules but integrated directly into the code of the game engine.

[0012] In the exemplary embodiment, the live game module 142 provides a set of tools with which the user 110 can watch and participate in a live or recorded online video game session (or simply eSport session), or a live or recorded real session or event with 3D environment data (for example, a real football game recorded with photogrammetry), with one or more other users 110, and it provides functionality for the eSport system as described in this document. According to an embodiment, the transmission module 144 provides a set of tools with which the user 110 can record and transmit (for example, including metadata) aspects of the online eSport session (or recorded eSport session) according to the eSport system as described in this document. The user 110 (for example, a controller) can either be a full participant in the eSport game or be a spectator of the eSport game. The metadata includes data that can be used by the live game module 142 and the display module 146 to provide functionality to the user 110 as described in this document. The transmission module 144 can also provide the user 110 with cinematographic tools (for example, using the cinematographic module 148) to allow the user to control one or more cameras using advanced camera techniques.

[0013] According to one embodiment, the display module 146 provides the user 110 with tools to display and interact with game data from the online eSport session (or recorded eSport session), including data and video from the session created by other users 110 (for example, using the transmission module 144, the display module 146, the cinematographic module 148 or the live game module 142), and to collect viewer metadata from the user 110 during viewing (for example, including real-time data regarding user actions or behavior, such as the game level being viewed, the game objective(s) being viewed, the camera being used for viewing, the event being followed, the viewing duration from a specific camera or from a specific camera angle, the player being viewed, the battle being viewed, shot composition, mouse position, and more). From this viewer data, the eSport module can determine which cameras viewers prefer, how quickly they switch cameras, what angles they prefer for the start of the game as opposed to the end of the game, and what angles they prefer when any given scenario is taking place. The collected viewer data may be referred to in this document as viewer metadata. In some embodiments, the display module 146 may use a camera device (not shown separately) to capture a video feed of the user 110 (for example, a viewer of broadcast content) and track the eye movement and head movement of the user 110 during the game (for example, approximating where the viewer is focusing his or her eye(s) at a particular time) in order to add the video feed and/or the tracked eye movement data to the viewer metadata. According to some embodiments, the tracked eye movements or head movements can be used to control a camera within the eSport session to allow a head-tracking camera mode, similar to a virtual reality head-mounted display or an augmented reality head-mounted display, that gives the user the ability to directly control a camera movement with a movement of the head (or eyes). In some embodiments, the display module 146 can analyze the viewer's video feed to determine viewers' facial expressions during the game. Some viewer metadata can be transmitted back to an eSport device 102 used by a player in the eSport session, and can be presented to the player for the purpose of informing the player about the viewer (for example, to inform the player that 30% of viewers are watching their battle, or that a viewer looks most at player X, or that the viewer prefers to watch battles from a drone camera, or that the viewer switches camera views at a given average time interval in seconds, or that the viewer has a specific reaction (for example, surprise, horror, amusement, sadness, etc.)). In some embodiments, the viewer metadata can be transmitted over the network to a central repository (for example, a database not shown separately in the figure) for storage and future use (for example, by advertisers and developers). In some embodiments, the display module 146 can capture, record and time-stamp the viewer metadata, the game data and the camera metadata. According to one embodiment, the time-stamped viewer metadata is aligned with the time-stamped game data and camera metadata. The time-stamped data can later be used by developers and advertisers to determine relationships between the data, including defining capture windows around any given response (for example, a query such as "what was in the metadata stream one second on either side of response X", where X represents any measurable viewer response, could be submitted to a database containing the time-stamped data) and viewer reactions to game events, camera events and specific controller actions.
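Purely by way of illustration, the viewer metadata of paragraph [0013] can be pictured as time-stamped records kept on a clock shared with the game data and camera metadata. The Python sketch below is a non-normative example; the field names, the record layout and the capture_window() helper are assumptions made for this illustration and are not prescribed by the embodiments.

```python
# Illustrative sketch only: field names and the capture_window() helper are
# hypothetical, not part of the disclosed embodiments.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ViewerMetadataRecord:
    timestamp: float            # seconds, on a clock shared with game/camera metadata
    viewer_id: str
    camera_id: str              # camera being used for viewing
    game_level: str             # game level being viewed
    subject: Optional[str]      # player or battle being viewed
    view_duration: float        # seconds spent on the current camera or angle
    mouse_position: tuple       # (x, y) in screen coordinates
    gaze_target: Optional[str]  # from optional eye/head tracking
    reaction: Optional[str]     # e.g. "surprise" or "amusement", from facial analysis

def capture_window(viewer_records, game_events, response_time, half_width=1.0):
    """Collect everything time-stamped within +/- half_width seconds of a
    measurable viewer response, as suggested in paragraph [0013]."""
    lo, hi = response_time - half_width, response_time + half_width
    return {
        "viewer": [r for r in viewer_records if lo <= r.timestamp <= hi],
        "game":   [e for e in game_events if lo <= e["timestamp"] <= hi],
    }
```

A query such as the one suggested above (what was in the metadata stream one second on either side of response X) then amounts to calling capture_window() with the time of the response.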
[0014] According to one embodiment, the cinematographic module 148 provides the live game module 142, the transmission module 144 and the display module 146 with a set of cinematographic tools for displaying and recording the online eSport session. Details on how the cinematographic module 148 provides these tools are given in this document with the description of the eSport system. According to one embodiment, the cinematographic tools include tools to create, position, orient and change the properties of virtual cameras in the game environment. The tools may include a graphical user interface for users to control cinematographic features implemented by the cinematographic module 148, an auto-cinematography tool to perform part or all of the cinematography functions in an automated mode, and a set of cinematography features exposed programmatically (for example, through an application programming interface or API), as well as a tool for creating and embedding cinematographic metadata in a device output. According to one embodiment, the cinematographic tools operate on high-level mechanisms that produce the desired shots without a user directly driving the camera (for example, with a joystick or mouse). The mechanisms include procedural composition, shot decision-making, collision avoidance and dynamic tracking behaviors.

[0015] Figure 2A illustrates the eSport system 200 in an exemplary network 280 through which a multiplayer computer game is provided online (for example, an eSport game). In the example, the eSport system 200 includes a transmission device 210, two player devices 220A, 220B (collectively, player devices 220), a display device 230, video sharing sites 240, online game servers 250 and an online rendering service 260, each communicating over the shared network 280 (for example, the Internet). The network 280 includes both wired and wireless networks. The transmission device 210, the player devices 220 and the display device 230 may be similar to the eSport device 102. The number of game players, controllers and spectators may vary. The online game servers 250 include a game server module (not shown separately in the figures) which may be similar to the game engine 130, but which is specifically configured to provide game server functionality. The online rendering service 260 can be an Internet-based service that provides graphics rendering services, also known as cloud rendering or network rendering.

[0016] In the exemplary embodiment, when the eSport session is an online video game, the online game can be configured using a client-server methodology for online games, whereby the online game server 250 runs an official version of the game and the client (for example, the live game module 142 on the player devices 220) runs a local version of the game (for example, via the game engine 130). The player devices 220 and the game server 250 communicate over the network 280, exchanging game data during the eSport session to create a real-time gaming environment for players 222A, 222B (collectively, players 222). The online game servers 250 collect game data from all players 222 through the live game module 142 of the player devices 220 and maintain an official version of the game. The live game module client 142 runs a local version of the game (for example, on each player device) and accepts data from the game server 250 (for example, including game data from other players 222) to update the local version of the game, using the server data as the official version so that the server data replaces the local data in case of a discrepancy.
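As a rough, non-authoritative sketch of the client-server arrangement described in paragraph [0016], the client could advance a locally predicted copy of the game state each frame and let the authoritative server data replace it wherever the two disagree. The class below is illustrative only; the state layout and the names LiveGameClient, predict() and reconcile() are invented for this sketch and are not the module's actual interface.

```python
# Minimal sketch of the client-server arrangement in paragraph [0016]; the state
# layout (plain dicts keyed by entity id) and the method names are assumptions.
class LiveGameClient:
    def __init__(self, step_entity):
        self.local_state = {}           # entity_id -> dict of locally predicted values
        self.step_entity = step_entity  # game code callback: (state, dt, inputs) -> new state

    def predict(self, dt, local_inputs):
        # Run the local copy of the game code so the player sees immediate feedback
        # despite network latency (client-side prediction).
        for entity_id, state in self.local_state.items():
            self.local_state[entity_id] = self.step_entity(state, dt, local_inputs)

    def reconcile(self, server_state):
        # The server runs the official version of the game: wherever the authoritative
        # data disagrees with the local prediction, the server data replaces it.
        for entity_id, official in server_state.items():
            if self.local_state.get(entity_id) != official:
                self.local_state[entity_id] = official
```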
[0017] In an exemplary embodiment, the modules of the eSport module 140 that are active on each of the devices 210, 220, 230 are shown in Figure 2A for the purpose of illustrating the primary functions of those devices 210, 220, 230. It should be understood, however, that any of the various modules described in this document may be operating on any of the devices 210, 220, 230. For example, during operation, the player devices 220A, 220B are operated by players 222A, 222B, respectively, while playing the eSport game, and the live game module 142 is active on each device (for example, to communicate with the online game servers 250, to provide the player 222 with the game environment, and to allow the player 222 to interact with the game). A controller 212 operates the transmission device 210 to deliver broadcast content for the eSport game (for example, to multiple viewers 232, 242), and the transmission module 144, the display module 146 and the cinematographic module 148 are active on the transmission device 210. The transmission module 144, the display module 146 and the cinematographic module 148 are active in order to provide the controller with the tools to create the broadcast content. A viewer 232 operates the display device 230 to consume the broadcast content generated by the controller 212 through the display module 146 and the cinematographic module 148 active on the display device 230. The display module 146 and the cinematographic module 148 are mainly active for the purpose of providing the viewer 232 with a view of the game and some cinematic control of the cameras. This is similar for viewers 242 watching from the video sharing sites 240.

[0018] In the exemplary embodiment, the broadcast content provided by the controller 212 (for example, by means of the transmission device 210) can be presented to viewers 242 through various video sharing sites 240. The video sharing sites 240 may include online content providers such as YouTube®, Twitch®, or other Internet video sharing sites. As shown in Figure 2A, the video sharing sites 240 may also include the eSport module 140 or components of the eSport module 140, such as the display module 146 or the cinematographic module 148, which can be used to present the broadcast content and other features to viewers 242. For example, the cinematographic module 148 can be implemented within a video sharing site 240 (for example, as a plug-in) in order to provide viewers with cinematographic tools to control the viewing of an eSport game. In some embodiments, the viewing tools and cinematographic tools would only be visible in the user interface of the video sharing site 240 if game metadata existed in the video content to be displayed. The cinematographic module 148 uses the game metadata to perform the cinematic functionality. According to one embodiment, the display module 146 of the eSport module 140 on a video sharing site 240 can perform some or all of the rendering locally on the user's device. According to another embodiment, the user's cinematographic viewing choices are rendered remotely (for example, in a cloud rendering service) and as needed (for example, on the fly), and the rendered video is transmitted to the user via a video sharing site 240. According to yet another embodiment, a fixed number of procedural cameras is active within the eSport session (recorded or live), and the view from each procedural camera is rendered remotely (for example, in a rendering service) and transmitted to the video sharing site 240 as a rendered video stream, so that the viewer can choose (for example, with the display module 146 or the cinematographic module 148) which of the rendered video streams to watch. Having procedural cameras with cloud rendering allows devices with poor rendering capability (for example, mobile phones or old computers) to display high rendering quality while still controlling the cameras.

[0019] Figure 2B illustrates the eSport system 200 in an exemplary network 280 through which a real event is provided (for example, a live or recorded sporting event, or a live or recorded non-sporting event). In the example, the eSport system 200 includes a transmission device 210, two display devices 230A, 230B (collectively, display devices 230), an online game server 250 and a database 290, each communicating over the shared network 280 (for example, the Internet). The eSport system 200 is connected to an external 3D video recording system 252 that provides 3D video and data from a real event 254. The transmission device 210 and the display device 230 can be similar to the eSport device 102. The number of controllers 210 and viewers (232, 242) may vary. The online game servers 250 include a game server module (not shown separately in the figure) that may be similar to the game engine 130, but which is specifically configured to provide game server functionality, such as transmitting 3D data from a live or recorded event. The 3D video recording system 252 is configured to generate 3D environment data from a real event 254 to be transmitted to viewers and controllers 210 through the online game servers 250. The 3D video recording system 252 is also configured to record the 3D environment data in the database 290 for future retransmission. The 3D video recording system 252 can generate the 3D environment data in many ways; for example, the system 252 can use one or more special cameras to directly record 3D data of the environment, or use a plurality of standard video cameras to record the event 254 from different angles and then use a method to generate 3D data from the multiple video streams.

[0020] In the exemplary embodiment, the modules of the eSport module 140 that are active on each of the devices 210, 230, 240 are shown in Figure 2B for the purpose of illustrating the primary functions of those devices 210, 230, 240. It should be understood, however, that any of the various modules described in this document may be operating on any of the devices 210, 230, 240. For example, during operation, a controller 212 operates the transmission device 210 to deliver broadcast content for the eSport session (for example, to multiple viewers 232, 242), and the transmission module 144, the display module 146 and the cinematographic module 148 are active on the transmission device 210. The transmission module 144, the display module 146 and the cinematographic module 148 are active in order to provide the controller with the tools to create the broadcast content. A viewer 232 operates the display device 230 to consume the broadcast content generated by the controller 212 and received directly from the online servers 250, and the display module 146 and the cinematographic module 148 are active. The display module 146 and the cinematographic module 148 are mainly active for the purpose of providing the viewer 232 with a view of the real event 254 and some cinematic control of the cameras. This is similar for viewers 242 watching from the video sharing sites 240.

[0021] Figures 3A and 3B illustrate an exemplary method 300 for creating high-quality broadcast content from eSport games for distribution and presentation to viewers 242 via video sharing sites 240 and to viewers 232 on a display device 230. In the exemplary embodiment, the method 300 is performed by the eSport system 200 in the networked environment illustrated in Figure 2A and Figure 2B. The method 300 is executed as players 222 actively play the game online using the live game modules 142 on their respective player devices 220. In operation 310, the live game module 142 records and transmits user input from the user input devices 106 (for example, from a joystick, mouse, head-mounted display, hand tracking device, etc.) to the game server 250. The game server 250 uses the game data received from all players 222 to build the official game version and then distributes game data from this official game back to all user devices 220. The live game module 142 executes game code locally to implement client-side prediction for the purpose of reducing the network latency effects that occur with communication between client and server. The live game module 142 also executes game code to integrate the game data from the server 250 (for example, game data from the other, remote player devices 220 and from the official game on the server 250) with the local game data, and displays the combined data on the display device 104. The live game module 142 also receives viewer metadata from the game server 250 and displays this data to the player 222.

[0022] In operation 312, in the exemplary embodiment, the transmission module 144 on the transmission device 210 receives game data (including all game metadata and viewer metadata) from the game server 250. The transmission module 144 receives the game data (including the game metadata) and the viewer metadata in operation 312, and uses the game data from the online game servers 250 to create and display a local version of the game to the controller 212 in operation 314. The transmission module 144 uses the game data to create and present the entire game environment in action. The transmission module 144 displays broadcast camera control tools through an interface for controlling a virtual camera operator (for example, through the cinematographic module 148) to create and direct camera shots. The cinematographic module 148 and the transmission module 144 use the game metadata to create and populate the camera control tools. In addition, the viewer metadata is displayed to the controller 212 and to other players 222, giving them feedback about the viewers 232, 242. The transmission module 144 displays broadcast tools to the controller 212 through a graphical user interface (GUI). The broadcast tools allow the controller 212 to control the cameras within the game environment, and to create (for example, record) and transmit (for example, broadcast) a customized version (for example, including the controller's camera control data) of the eSport session over the network 280. The tools include user interface (UI) elements that allow the controller 212, for example, to view the eSport session from an existing game camera, to create new in-game cameras to view the eSport session from any position, and to activate the cinematographic module 148 to help control the cameras. Camera positions and screen compositions can be completely or partially controlled by the cinematographic module 148. Throughout the description in this document, the terms composition and compositional refer to the placement or arrangement of visual elements in a screen shot (for example, in a 3D scene of an eSport game or video game). The transmission module 144 uses the game metadata and the cinematographic module 148 to control the cameras in the game for the purpose of creating high quality video from the eSport session. The cinematographic module 148 uses automatic composition to compose shots of the game action using directions from the controller. The automatic composition used by the cinematographic module 148 can be rule-based (for example, using rules of cinematographic composition) and can be controlled by artificial intelligence. According to one embodiment, each game would have data and instructions that define the types of angles and shots available for each character and scenario (for example, the data and instructions could be determined by a game developer when creating the game).
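The per-game data mentioned above (the angles and shot types a developer makes available for each character and scenario) could, purely as an illustration, be expressed as a small declarative table. The entries and names below are hypothetical examples, not content required by any embodiment.

```python
# Hypothetical shot vocabulary a developer might ship with a game; the keys and
# values are examples only.
SHOT_VOCABULARY = {
    # subject or event -> shot styles the cinematographic module may use for it
    "character_A": ["close_up", "tracking", "wide_angle"],
    "fatal_shot":  ["slow_drone_orbit", "close_up"],
    "leveling_up": ["security_camera", "wide_angle"],
    "running":     ["tracking", "drone_follow"],
}

def available_shots(subject):
    """Shot styles the controller can request for a given subject or event."""
    return SHOT_VOCABULARY.get(subject, ["wide_angle"])  # fall back to a safe default
```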
During play, the controller is then able to select from the different available shot styles (close-up, wide angle, drone shot, security camera, tracking camera, etc.) and from different subjects or events (character A, fatal shot, leveling up, running, etc.). The controller would use the transmission device 210 to quickly transmit the camera type and subject/event, which would be carried out by the cinematographic module 148 to produce a good shot in any situation. In operation 316, the transmission module 144 also records real-time video and audio commentary from the controller onto the eSport session video. In operation 318, the transmission module 144 creates an output for the session that includes the position and properties of all recording cameras (referred to below as camera metadata) at all times, along with the real-time audio and video commentary from the controller, and that includes the game data. In some embodiments, the transmission module output may include all camera positions, orientations and lens settings for each frame. In some embodiments, higher-level camera commands can be transmitted (for example, close-up shot on player X, wide-angle shot on player Y). In this case, the cinematographic module 148 can be used to process the higher-level commands. In some embodiments, the online rendering service 260 may include the cinematographic module 148 in order to process the higher-level commands. In some embodiments, the online game server 250 can transmit the game data directly to the online rendering service 260, thereby reducing latency.

[0023] In the exemplary embodiment, in operation 320, the transmission module 144 packages the output and transmits the data over the network 280 to the online rendering service 260. In operation 322, the rendering service 260 uses the game data and camera metadata to render a broadcast video (for example, a video stream) of the game using the compositional camera shots that were chosen by the controller 212 and created by the cinematographic module 148. In operation 324, the rendered broadcast video and the game data are transmitted to the video sharing service 240 to be displayed to viewers 242. In operation 326, a display module in the video sharing service (not shown separately, but which may be similar to the display module 146) displays the game metadata along with the broadcast video and collects viewer metadata while the video is being displayed. The video sharing service 240 receives and displays the broadcast video and the game metadata (for example, including all controller metadata). The display module 146 uses the metadata to display information not traditionally available in the video feed of a multiplayer game (for example, game version, level, active characters on the current screen, mission progression, game state, hours played, purchases, etc.). The cinematographic module 148 can use the metadata to display information such as, for example, the current subject, the animations on the screen, the type of lens (for example, telephoto, normal, wide angle), the camera angle, etc. In operation 327, the display module synchronizes the game data and the viewer metadata.
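As an illustrative sketch of the camera metadata discussed in paragraphs [0022] and [0023], a higher-level command such as a close-up shot on player X could be expanded, frame by frame, into the per-frame position, orientation and lens settings that are packaged for the rendering service. The command format, the framing constants and the resolve_command() helper below are assumptions made for this sketch, not the cinematographic module's actual algorithm.

```python
# Illustrative only; the command format and the framing math are assumptions,
# not the patent's prescribed implementation.
from dataclasses import dataclass

@dataclass
class CameraFrame:
    time: float
    position: tuple        # (x, y, z) in world space
    look_at: tuple         # point the camera is aimed at
    focal_length_mm: float

def resolve_command(command, subject_position, fps=30.0):
    """Expand a high-level command, e.g. {"shot": "close_up", "subject": "player_X",
    "duration": 2.0}, into per-frame camera metadata. subject_position(subject, t)
    is assumed to sample the subject's world position from the game metadata."""
    focal = {"close_up": 85.0, "wide_angle": 24.0}.get(command["shot"], 50.0)
    distance = {"close_up": 2.0, "wide_angle": 12.0}.get(command["shot"], 6.0)
    frames = []
    for i in range(int(command["duration"] * fps)):
        t = i / fps
        sx, sy, sz = subject_position(command["subject"], t)
        frames.append(CameraFrame(time=t,
                                  position=(sx, sy + 1.6, sz - distance),
                                  look_at=(sx, sy + 1.6, sz),
                                  focal_length_mm=focal))
    return frames
```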
[0024] In operation 328, in the exemplary embodiment, the display module 146 in the video sharing service 240 also collects data from viewers through a user interface and transmits that data (for example, viewer metadata) over the network 280 in real time to the online game servers 250, for distribution to the eSport devices (including the player devices 220, the transmission devices 210, the display devices 230 and the display module 146 on the video sharing sites 240), for the game controllers 212 and the players 222. In some embodiments, the online game server 250 also collects and stores the viewer metadata for further processing. The live game module 142 on the player devices 220 can use the viewer metadata to display viewer information during a session and to influence the game. Similarly, the live game module 142 on the transmission device 210 can use the viewer metadata to display viewer information during a session and to influence the controller 212's recording of the eSport session.

[0025] In some embodiments, the rendering services provided by the online rendering service 260 can be provided by a local rendering module (not shown separately) on the transmission device 210. In other words, operations 320, 322 and 324 can be executed locally on the transmission device 210 by the local rendering module.

[0026] In some embodiments, the broadcast content generated by the method 300 can be viewed locally by the controller 212. In other words, operations 324, 326, 327 and 328 can be performed locally on the transmission device 210 by the display module 146. In this case, the controller 212 also acts as a viewer 242 and 232. In such an event, the controller 212 is similar to a viewer 232 and the transmission device 210 is similar to the display device 230, so that the viewer 232 can directly control the cameras with the cinematographic module 148 and directly view the resulting video.

[0027] In some embodiments, the human controller 212 can be replaced by an automatic version of the cinematographic module 148 executed on the transmission device 210 and executing operations 314 and 316. In other words, the automatic cinematographic module 148 receives the game data and viewer metadata from the game server 250. The automatic cinematographic module 148 uses the game data and viewer metadata to create a local version of the game (for example, using a live game module 142 on the transmission device). The automatic cinematographic module 148 contains an artificial intelligence (AI) controlled camera operator that uses artificial intelligence (for example, machine learning and neural networks) to choose and compose shots for the purpose of creating a broadcast video. The automatic cinematographic module 148 creates an output for the session that includes the position and properties of the recording camera at each moment (for example, camera X with properties for time 1, camera X with properties for time 2, camera Y with properties for time 3, etc.). The transmission module 144 packages the camera metadata with the game data and transmits the data over a network to an online rendering service 260. The rendering service 260 uses the game data and camera metadata to render the broadcast video of the game based on the instructions from the automatic cinematographic module 148. The broadcast video and the game data are transmitted to the video sharing service 240 to be displayed to viewers 242. A special display module on the video sharing service website displays the game metadata along with the broadcast video and collects viewer metadata while the video is being displayed. In accordance with operation 328, the viewer metadata is transmitted from the display module 146 on the video sharing site 240 to the automatic cinematographic module 148 on the transmission device 210. The automatic cinematographic module 148 on the transmission device 210 can use the viewer metadata to influence the recording of the eSport session.

[0028] According to one embodiment, when the automatic cinematographic module 148 receives game data from the game server 250, the automatic cinematographic module 148 delays its output (for example, to the rendering service 260). The delay can be any amount of time, but a typical delay would be 2 or 3 seconds. The result of the delay is that the automatic cinematographic module 148 has real-time data from the game server 250 while any viewers (for example, a controller 212 via a transmission device 210, a viewer 242 via a video sharing site 240, and a viewer 232 via a display device 230) see a slightly delayed version of the game video. This allows the cameras (via the automatic cinematographic module 148) to see into the future during the delay. During the delay, the automatic cinematographic module 148 searches the game data for events (for example, an explosion, the death of a player, an ambush, shots fired at a player, etc.), positions (for example, by means of positioning and composition) one or more cameras in the game environment to cover any events discovered, and then cuts to a camera shot of the event before it happens (for example, before it is seen by a viewer/controller), thereby creating a properly positioned and composed shot of events that are about to happen, from the viewer's perspective. From the point of view of a viewer/controller, the camera always arrives before the action of the event and provides viewers with a good view of the action that surrounds the event.
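One way to picture the delayed output of paragraph [0028] is as a short look-ahead buffer: the module holds a few seconds of authoritative game data, scans the portion not yet released to viewers for prioritized events, and schedules a cut shortly before each event would appear on screen. The sketch below is an assumption-laden outline (the buffer length, the event fields and the scheduling policy are all invented for illustration), not the module's actual implementation.

```python
# Hedged sketch of a look-ahead cinematography buffer; all names are illustrative.
import collections

class LookAheadDirector:
    def __init__(self, delay_seconds=3.0, lead_time=0.5):
        self.delay = delay_seconds          # viewers see the feed this far behind real time
        self.lead = lead_time               # cut this long before the event appears on screen
        self.buffer = collections.deque()   # (timestamp, game_data) pairs awaiting publication
        self.scheduled_cuts = []            # (cut_time, camera_setup) pairs

    def ingest(self, timestamp, game_data):
        # Real-time game data arrives here; events such as an explosion, a player
        # death or an ambush are assumed to carry a timestamp, subject and priority.
        self.buffer.append((timestamp, game_data))
        for event in game_data.get("events", []):
            if event.get("priority", 0) >= 1:
                cut_time = event["timestamp"] - self.lead
                self.scheduled_cuts.append(
                    (cut_time, {"shot": "wide_angle", "subject": event["subject"]}))

    def publish(self, now):
        """Release frames older than the delay, together with any cuts now due."""
        released = []
        while self.buffer and self.buffer[0][0] <= now - self.delay:
            released.append(self.buffer.popleft())
        due = [c for c in self.scheduled_cuts if c[0] <= now - self.delay]
        self.scheduled_cuts = [c for c in self.scheduled_cuts if c[0] > now - self.delay]
        return released, due
```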
[0029] The delay allows game developers to prioritize any key game events (for example, events that are defined as important by the developers during game creation, and events that are determined to be important by artificial intelligence during the game) within their game. The delay is also useful when watching a real event in real time (for example, a football game, a hockey game or a soccer game, including live real non-sporting events), since the delay allows artificial intelligence or live event commentators to determine or mark specific actions as important. It also allows viewers 232, 242 or controllers 212 to choose whether the automatic cinematographic module 148 should search for game events and actions and adjust the cinematography accordingly, or ignore the game events and actions. In other words, the automatic cinematographic module 148 can have a delay mode, whereby having the delay mode on implies having the module 148 film (for example, position, compose and cut cameras) and display the game according to the game events and actions. For example, during a live real game or event, a viewer 232, 242 or controller 212 can choose preferred camera types (for example, close-up on player #3) for standard filming, and the automatic cinematographic module 148 will depart from this standard camera type if a high-priority event or action occurs. Alternatively, having the delay mode turned off will cause the module 148 to ignore events or actions while filming the game. The delay is useful because one of the biggest issues with eSports viewing is how to best present the game to all viewers.

[0030] Figures 3C and 3D illustrate an exemplary method 380 for controlling the cinematic viewing of high-quality broadcast content from an eSport event (for example, a football game) on a display device 230. In this exemplary embodiment, the method 380 is performed by the eSport system 200 in the network environment illustrated in Figure 2B. In operation 340, the eSport system 200 receives game data (including 3D video and data) from an external 3D video recording system 252, where the game data comes from the recording of a real event 254. In this embodiment, the term game data includes the data and 3D video of the real event 254. In operation 342, the display module 146 receives the game data. In operation 344, the display module 146 uses the game data to create and display a 3D representation of the real event. The 3D representation can have a high or low polygon count mesh and can have high or low resolution texturing. The display module 146 displays camera control tools to the viewer through a user interface. The control tools are used by the viewer to control an amateur camera operator to create and direct camera shots for the purpose of creating a video. In operation 346, the viewer uses the tools (for example, procedural cameras) to create instructions for recording a video of the event 254. The camera positions, camera cuts and composition are either completely or partially controlled by the cinematographic module 148. In operation 348, the display module 146 creates an output for the event that includes the position and properties of all recording cameras at any given time (for example, camera data) and that also includes the 3D data. In operation 350, the display device uses the 3D data from the game data and the camera data to render a video of the event based on the viewer's cinematic instructions. According to another embodiment, the display device packages and transmits the game data and camera data to an external rendering service for rendering. In operation 352, the display module 146 displays the video and collects viewer metadata while the video is being watched. In operation 354, the display module synchronizes the game data with the viewer metadata. In operation 356, the viewer metadata is fed back to the external system 252 (for example, in real time).
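By way of example only, the output created in operation 348 (the position and properties of all recording cameras over time, together with a reference to the 3D data) could be packaged as a single serializable payload before local or remote rendering. The schema, the example values and the package_output() helper below are hypothetical; the embodiments do not prescribe any particular format.

```python
# Illustrative packaging of the operation-348 output; the payload schema is an
# assumption made for this sketch, not a format defined by the embodiments.
import json

def package_output(camera_frames, environment_3d_uri):
    """camera_frames: list of dicts with time, position, look_at and lens settings."""
    payload = {
        "camera_data": camera_frames,          # position/properties of every recording camera over time
        "environment_3d": environment_3d_uri,  # reference to the recorded 3D environment data
        "version": 1,
    }
    return json.dumps(payload)

# Example: two frames of a single procedural camera (values are invented).
print(package_output(
    [{"time": 0.0, "position": [0, 2, -8], "look_at": [0, 1, 0], "focal_length_mm": 35},
     {"time": 0.5, "position": [1, 2, -8], "look_at": [0, 1, 0], "focal_length_mm": 35}],
    "file:///tmp/event_254_photogrammetry.glb"))
```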
[0031] Figure 4 is a block diagram illustrating an exemplary software architecture 402, which can be used in conjunction with the various hardware architectures described in this document. Figure 4 is a non-limiting example of a software architecture, and it will be understood that many other architectures can be implemented to facilitate the functionality described in this document. The software architecture 402 can be executed on hardware such as the machine 500 of Figure 5, which includes, among other things, processors 510, memory 530 and input/output (I/O) components 550. A representative hardware layer 404 is illustrated and can represent, for example, the machine 500 of Figure 5. The representative hardware layer 404 includes a processing unit 406 that has executable instructions 408. The executable instructions 408 represent the executable instructions of the software architecture 402, including the implementation of the methods, modules, and so forth described in this document. The hardware layer 404 also includes memory and/or storage modules shown as memory/storage 410, which also have the executable instructions 408. The hardware layer 404 can also comprise other hardware 412.

[0032] In the exemplary architecture of Figure 4, the software architecture 402 can be conceptualized as a stack of layers where each layer provides specific functionality. For example, the software architecture 402 can include layers such as an operating system 414, libraries 416, frameworks or middleware 418, applications 420 and a presentation layer 444. Operationally, the applications 420 and/or other components within the layers can invoke application programming interface (API) calls 424 through the software stack and receive a response as messages 426. The layers illustrated are representative in nature, and not all software architectures have all layers. For example, some mobile or special-purpose operating systems may not provide the frameworks/middleware 418, while others may provide such a layer. Other software architectures may include additional or different layers.

[0033] The operating system 414 can manage hardware resources and provide common services. The operating system 414 can include, for example, a kernel 428, services 430 and drivers 432. The kernel 428 can act as an abstraction layer between the hardware and the other software layers. For example, the kernel 428 may be responsible for memory management, processor management (for example, scheduling), component management, networking, security settings, and so on. The services 430 can provide other common services for the other software layers. The drivers 432 can be responsible for controlling or interfacing with the underlying hardware. For example, the drivers 432 can include display drivers, camera drivers, Bluetooth® drivers, flash memory drivers, serial communication drivers (for example, Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and so on, depending on the hardware configuration.

[0034] The libraries 416 can provide a common infrastructure that can be used by the applications 420 and/or other components and/or layers. The libraries 416 typically provide functionality that allows other software modules to perform tasks in an easier way than by interfacing directly with the underlying operating system 414 functionality (for example, kernel 428, services 430 and/or drivers 432). The libraries 416 may include system libraries 434 (for example, a standard C library) that can provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries 416 can include API libraries 436 such as media libraries (for example, libraries to support presentation and manipulation of various media formats such as MPEG4, H.264, MP3, AAC, AMR, JPG and PNG), graphics libraries (for example, an OpenGL framework that can be used to render 2D and 3D graphics content on a display), database libraries (for example, SQLite, which can provide various relational database functions), web libraries (for example, WebKit, which can provide web browsing functionality), and the like. The libraries 416 can also include a wide variety of other libraries 438 to provide many other APIs to the applications 420 and other software components/modules.

[0035] The frameworks 418 (also sometimes referred to as middleware) provide a higher-level common infrastructure that can be used by the applications 420 and/or other software components/modules. For example, the frameworks/middleware 418 can provide various graphical user interface (GUI) functions, high-level resource management, high-level location services, and so on. The frameworks/middleware 418 can provide a broad spectrum of other APIs that can be used by the applications 420 and/or other software components/modules, some of which may be specific to a particular operating system or platform.

[0036] The applications 420 include built-in applications 440 and/or third-party applications 442. Examples of representative built-in applications 440 may include, but are not limited to, a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application and/or a game application. The third-party applications 442 may include an application developed using a software development kit (SDK) such as the Android™ or iOS™ SDK by an entity other than the vendor of the specific platform, and may be mobile software that runs on a mobile operating system such as iOS™, Android™, Windows® Phone, or other mobile operating systems. The third-party applications 442 can invoke the API calls 424 provided by the mobile operating system, such as the operating system 414, to facilitate the functionality described in this document.

[0037] The applications 420 can use built-in operating system functions (for example, kernel 428, services 430 and/or drivers 432), libraries 416, or frameworks/middleware 418 to create user interfaces to interact with users of the system. Alternatively or additionally, in some systems, interactions with a user can occur through a presentation layer, such as the presentation layer 444. In these systems, the application/module logic can be separated from the aspects of the application/module that interact with a user.

[0038] Some software architectures use virtual machines. In the example of Figure 4, this is illustrated by a virtual machine 448. The virtual machine 448 creates a software environment where applications/modules can be executed as if they were executing on a hardware machine (such as the machine 500 of Figure 5, for example). The virtual machine 448 is hosted by a host operating system (for example, the operating system 414 in Figure 4) and typically, although not always, has a virtual machine monitor 446, which manages the operation of the virtual machine 448 as well as the interface with the host operating system (for example, operating system 414). A software architecture executes inside the virtual machine 448, such as an operating system (OS) 450, libraries 452, frameworks 454, applications 456 and/or a presentation layer 458. These layers of the software architecture that run inside the virtual machine 448 can be the same as the corresponding layers previously described or can be different.

[0039] Figure 5 is a block diagram illustrating components of a machine 500, according to some exemplary embodiments, capable of reading instructions from a machine-readable medium (for example, a machine-readable storage medium) and executing any one or more of the methodologies discussed in this document. Specifically, Figure 5 shows a diagrammatic representation of the machine 500 in the exemplary form of a computer system, within which instructions 516 (for example, software, a program, an application, an applet, an app, or other executable code) for causing the machine 500 to execute any one or more of the methodologies discussed in this document can be executed. As such, the instructions 516 can be used to implement the modules or components described in this document. The instructions 516 transform the general, non-programmed machine 500 into a particular machine 500 programmed to perform the functions described and illustrated in the manner described. In alternative embodiments, the machine 500 operates as a standalone device or can be coupled (for example, networked) to other machines. In a networked deployment, the machine 500 can operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine 500 may comprise, but is not limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smartphone, a mobile device, a wearable device (for example, a smart watch), a smart home device (for example, a smart home appliance), other smart devices, an Internet device, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 516, sequentially or otherwise, that specify actions to be taken by the machine 500. In addition, although only a single machine 500 is illustrated, the term machine should also be considered to include a collection of machines that individually or jointly execute the instructions 516 to perform any one or more of the methodologies discussed in this document.

[0040] The machine 500 can include processors 510, memory 530 and input/output (I/O) components 550, which can be configured to communicate with each other via a bus 502. In an exemplary embodiment, the processors 510 (for example, a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 512 and a processor 514 that can execute the instructions 516.
The term processor is intended to include a multi-core processor that can comprise two or more independent processors (sometimes referred to as a core s) that can execute instructions at the same time. Although Figure 5 shows multiple processors, machine 500 can include a Petition 870190074105, of 08/01/2019, p. 42/59 37/44 single processor with a single core, single processor with multiple cores (for example, a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination of these. [0041] Memory 530 may include a memory, such as a main memory 532, a static memory 534, or other memory storage, and a storage unit 536, both accessible to processors 510 via bus 502. The memory unit storage 536 and memory 532, 534 store instructions 516 that incorporate any one or more of the methodologies or functions described in this document. Instructions 516 may also reside, completely or partially, within memory 532, 534, within storage unit 536, within at least one of the processors 510 (for example, within the processor cache memory), or any combination thereof, during its execution by machine 500. Consequently, memory 532, 534, storage unit 536 and processor memory 510 are examples of machine-readable media. [0042] As used in this document, machine-readable medium means a device capable of storing instructions and data temporarily or permanently and may include random access memory (RAM), read-only memory (ROM), buffer memory, flash memory, media optical media, magnetic media, cache memory, other types of storage (for example, Erasable Programmable Read Only Memory (EEPROM), and / or any combination of those, but is not limited to them. The term readable means Petition 870190074105, of 08/01/2019, p. 43/59 38/44 per machine should be considered to include a single medium or multiple media (for example, a centralized or distributed database, or associated caches and servers) capable of storing the 516 instructions. The machine-readable medium term should also be considered to include any medium, or combination of multiple media, that is capable of storing instructions (for example, 516 instructions) for execution by a machine (for example, machine 500), so that the instructions, when executed by one or more processors on machine 500 (for example, 510 processors), have machine 500 perform any one or more of the methodologies described in this document. Consequently, a machine-readable medium refers to a single storage device or device, as well as cloud-based storage systems or storage networks that include multiple storage devices or devices. The term machine-readable means excludes signs by itself. [0043] The input / output (I / O) 550 components can include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, etc. The specific input / output (I / O) components 550 that are included in a specific machine will depend on the type of machine. For example, portable machines such as mobile phones are likely to include a touch sensitive input device or other such input mechanisms, while a headless server machine is unlikely to include such a touch sensitive input device. It will be understood that the components of Petition 870190074105, of 08/01/2019, p. 44/59 39/44 input / output (I / O) 550 can include many other components that are not shown in Figure 5. 
The input/output (I/O) components 550 are grouped purely according to functionality to simplify the following discussion, and the grouping is in no way limiting. In several exemplary embodiments, the input/output (I/O) components 550 may include output components 552 and input components 554. Output components 552 may include visual components (for example, a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (for example, speakers), haptic components (for example, a vibrating motor, resistance mechanisms), other signal generators, etc. Input components 554 can include alphanumeric input components (for example, a keyboard, a touchscreen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (for example, a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instruments), tactile input components (for example, a physical button, a touchscreen that provides the location and/or force of touches or touch gestures, or other tactile input components), audio input components (for example, a microphone), and the like.

[0044] In other exemplary embodiments, the input/output (I/O) components 550 may include biometric components 556, motion components 558, environmental components 560, or position components 562, among a wide range of other components. For example, the biometric components 556 can include components to detect expressions (for example, hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (for example, blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (for example, voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components 558 can include acceleration sensor components (for example, an accelerometer), gravity sensor components, rotation sensor components (for example, a gyroscope), etc. The environmental components 560 may include, for example, illumination sensor components (for example, a photometer), temperature sensor components (for example, one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (for example, a barometer), acoustic sensor components (for example, one or more microphones that detect background noise), proximity sensor components (for example, infrared sensors that detect nearby objects), gas sensors (for example, gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components 562 can include location sensor components (for example, a Global Positioning System (GPS) receiver component), altitude sensor components (for example, altimeters or barometers that detect the atmospheric pressure from which altitude can be derived), orientation sensor components (for example, magnetometers), and the like.
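The eye-tracking capability listed among the biometric components 556 is one possible source of the eye movement data that the claims below associate with viewer responses. As a purely illustrative sketch, not taken from the disclosure, a viewing device could reduce a raw gaze sample to an eye-focus game object before reporting it as viewer metadata; the names GazeSample, SceneObject and pick_focus_object below are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class GazeSample:
    timestamp: float   # seconds since session start
    screen_x: float    # normalized [0, 1] horizontal gaze position
    screen_y: float    # normalized [0, 1] vertical gaze position

@dataclass
class SceneObject:
    object_id: str
    screen_bounds: Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max), normalized

def pick_focus_object(sample: GazeSample, visible_objects: List[SceneObject]) -> Optional[str]:
    """Return the id of the visible object whose on-screen bounds contain the gaze point.

    If several objects overlap the gaze point, the smallest (most specific) one wins.
    Returns None when the viewer is not looking at any tracked object.
    """
    hits = [
        obj for obj in visible_objects
        if obj.screen_bounds[0] <= sample.screen_x <= obj.screen_bounds[2]
        and obj.screen_bounds[1] <= sample.screen_y <= obj.screen_bounds[3]
    ]
    if not hits:
        return None

    def area(obj: SceneObject) -> float:
        x0, y0, x1, y1 = obj.screen_bounds
        return (x1 - x0) * (y1 - y0)

    return min(hits, key=area).object_id

# Example: one gaze sample resolved against two on-screen objects.
if __name__ == "__main__":
    objects = [
        SceneObject("player_7", (0.40, 0.30, 0.55, 0.70)),
        SceneObject("flag_base", (0.10, 0.10, 0.90, 0.90)),
    ]
    sample = GazeSample(timestamp=12.4, screen_x=0.47, screen_y=0.52)
    print(pick_focus_object(sample, objects))  # -> "player_7" (smaller overlapping bounds)
```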
[0045] Communication can be implemented using a wide variety of technologies. The input/output (I/O) components 550 may include communication components 564 operable to couple the machine 500 to a network 580 or to devices 570 by means of a coupling 582 and a coupling 572, respectively. For example, the communication components 564 may include a network interface component or another device suitable for interfacing with the network 580. In other examples, the communication components 564 may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (for example, Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication through other modalities. The devices 570 can be another machine or any of a wide variety of peripheral devices (for example, a peripheral device coupled via Universal Serial Bus (USB)).

[0046] In addition, the communication components 564 can detect identifiers or include components operable to detect identifiers. For example, the communication components 564 may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (for example, an optical sensor for detecting one-dimensional bar codes such as the Universal Product Code (UPC) bar code, multidimensional bar codes such as the Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (for example, microphones to identify tagged audio signals). In addition, a variety of information can be obtained through the communication components 564, such as location via Internet Protocol (IP) geolocation, location via triangulation of Wi-Fi® signals, location via detection of an NFC beacon signal that can indicate a specific location, etc.

[0047] Although an overview of the subject matter of the invention has been described with reference to specific exemplary embodiments, various modifications and changes can be made to these embodiments without departing from the broader scope of the embodiments of the present disclosure. Such embodiments of the subject matter of the invention may be referred to in this document, individually or collectively, by the term invention merely for convenience and without any intention of voluntarily limiting the scope of this patent application to any single disclosure or inventive concept if more than one is in fact disclosed.

[0048] The embodiments illustrated in this document are described in sufficient detail to allow those skilled in the art to practice the disclosed teachings. Other embodiments can be used and derived from them, such that structural and logical substitutions and changes can be made without departing from the scope of this disclosure. The Detailed Description, therefore, should not be considered in a limiting sense, and the scope of the various embodiments is defined only by the appended claims, together with the full range of equivalents to which such claims are entitled.

[0049] As used in this document, the term or can be construed in either an inclusive or an exclusive sense. In addition, plural instances may be provided for resources, operations or structures described in this document as a single instance.
In addition, boundaries between various resources, operations, modules, engines and data stores are somewhat arbitrary, and specific operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations can be implemented as a combined structure or resource. Similarly, structure and functionality presented as a single resource can be implemented as separate resources. These and other variations, modifications, additions and improvements fall within the scope of the embodiments of the present disclosure as represented by the appended claims. The specification and drawings should, therefore, be considered in an illustrative rather than a restrictive sense.
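The claims that follow recite a pipeline: receive game metadata and viewer metadata, determine the viewers' preferred cinematography from their responses, create additional game metadata containing camera data, and integrate that camera data back into the game metadata for near real-time presentation. The sketch below is a minimal illustration of one way such a pipeline could be wired together; it is not part of the specification or claims, and every identifier (ViewerResponse, GameMetadata, determine_preferred_cinematography, and so on) is hypothetical, with a simple majority vote standing in for whatever preference model an implementation might actually use.

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ViewerResponse:
    viewer_id: str
    followed_event: str   # e.g. "flag_capture"
    focus_object: str     # e.g. "player_7" (possibly derived from eye tracking)
    reaction: str         # e.g. "cheer", "like"

@dataclass
class GameMetadata:
    modeling_data: Dict[str, dict]                  # 3D modeling data keyed by object id
    cameras: List[dict] = field(default_factory=list)

def determine_preferred_cinematography(responses: List[ViewerResponse]) -> str:
    """Pick the game object most viewers are focused on for the next shot."""
    counts = Counter(r.focus_object for r in responses)
    target, _ = counts.most_common(1)[0]
    return target

def create_additional_metadata(game_meta: GameMetadata, target: str) -> dict:
    """Build camera data (a new camera angle) framed on the preferred target."""
    anchor = game_meta.modeling_data[target]["position"]            # assumed [x, y, z]
    return {
        "type": "camera",
        "target": target,
        "position": [anchor[0] + 4.0, anchor[1] + 2.0, anchor[2]],  # offset vantage point
        "look_at": anchor,
    }

def integrate(game_meta: GameMetadata, camera_meta: dict) -> GameMetadata:
    """Merge the additional camera metadata into the game metadata for broadcast."""
    game_meta.cameras.append(camera_meta)
    return game_meta

# Example: two of three viewers are watching player_7, so the new camera frames player_7.
game_meta = GameMetadata(modeling_data={"player_7": {"position": [10.0, 0.0, 5.0]},
                                        "flag_base": {"position": [0.0, 0.0, 0.0]}})
responses = [
    ViewerResponse("v1", "flag_capture", "player_7", "cheer"),
    ViewerResponse("v2", "flag_capture", "player_7", "like"),
    ViewerResponse("v3", "flag_capture", "flag_base", "like"),
]
target = determine_preferred_cinematography(responses)
game_meta = integrate(game_meta, create_additional_metadata(game_meta, target))
print(target, len(game_meta.cameras))  # -> player_7 1
```

Here the preference determination is reduced to a vote over focus objects; an actual system could equally weight reactions, followed events, or eye movement data when selecting the camera target.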
Claims (20)

[1] 1. System, characterized by comprising: one or more computer processors; one or more computer memories; one or more modules incorporated in the one or more memories, the one or more modules configuring the one or more computer processors to perform operations to automatically generate a customized version of a video session for presentation on at least one display device, the operations comprising: receiving game metadata generated from a live or recorded video feed, the game metadata including three-dimensional modeling data associated with the live or recorded video feed; receiving viewer metadata collected from a plurality of viewing devices, the viewer metadata including information regarding a plurality of responses from a plurality of viewers to a presentation of the video session on the plurality of viewing devices; determining preferred cinematography of the plurality of viewers in relation to events that occur within the game based on the viewer metadata; automatically creating additional game metadata based on the viewer metadata, the additional game metadata including camera data associated with at least one of a new camera angle or a different camera angle to reflect the preferred cinematography; and integrating the additional game metadata into the game metadata to adjust, at least in near real time, the cinematography of the events for presentation on the at least one display device.

[2] 2. System, according to claim 1, characterized by the fact that the creation of the additional game metadata is performed by a user using camera control tools, the camera control tools allowing the user to control cameras and create new cameras within an environment modeled by the three-dimensional modeling data.

[3] 3. System, according to claim 2, characterized by the fact that the operations further comprise presenting information referring to the game metadata in a first region of a user interface and presenting information referring to the viewer metadata in a second region of the user interface.

[4] 4. System, according to claim 1, characterized by the fact that the plurality of responses includes information regarding at least one of a game level being viewed, a game object being viewed, a camera being used for viewing, a game event being followed, or a reaction from a viewer.

[5] 5. System, according to claim 1, characterized by the fact that the adjusting of the cinematography includes invoking an automatic cinematography feature of a cinematography module, the automatic cinematography feature including automatically configuring a shot composition during a delay inserted into a broadcast of the video session and automatically cutting to the shot before the game event occurs during the presentation.

[6] 6. System, according to claim 1, characterized by the fact that the video session is an eSport video session and the operations further include: receiving game data collected from a plurality of player devices, the game data representing a plurality of actions performed by a plurality of players associated with the plurality of player devices, each of the plurality of actions being an action performed by the plurality of players that changes an aspect of a game environment being played in the video session; and synchronizing the game data with the game metadata for transmission to a game server.
[7] 7. System, according to claim 1, characterized by the fact that the plurality of responses includes eye movement data corresponding to the plurality of users, and the operations further include synchronizing the eye movement data with the game metadata for each of the plurality of responses.

[8] 8. System, according to claim 7, characterized by the fact that the eye movement data identifies at least one of an eye focus area and an eye focus game object of each of the plurality of players during the event.

[9] 9. Method, characterized by comprising: incorporating one or more modules into one or more computer memories through a computer-implemented implementation process, the one or more modules configuring one or more computer processors to perform operations to automatically generate a customized version of a video session for presentation on at least one display device, the operations comprising: receiving game metadata generated from a live or recorded video feed, the game metadata including three-dimensional modeling data associated with the live or recorded video feed; receiving viewer metadata collected from a plurality of viewing devices, the viewer metadata including information regarding a plurality of responses from a plurality of viewers to a presentation of the video session on the plurality of viewing devices; determining preferred cinematography of the plurality of viewers in relation to events that occur within the game based on the viewer metadata; automatically creating additional game metadata based on the viewer metadata, the additional game metadata including camera data associated with at least one of a new camera angle or a different camera angle to reflect the preferred cinematography; and integrating the additional game metadata into the game metadata to adjust, at least in near real time, the cinematography of the events for presentation on the at least one display device.

[10] 10. Method, according to claim 9, characterized by the fact that the creation of the additional game metadata is performed by a user using camera control tools, the camera control tools allowing the user to control cameras and create new cameras within an environment modeled by the three-dimensional modeling data.

[11] 11. Method, according to claim 10, characterized by the fact that the operations further comprise presenting information referring to the game metadata in a first region of a user interface and presenting information referring to the viewer metadata in a second region of the user interface.

[12] 12. Method, according to claim 9, characterized by the fact that the plurality of responses includes information regarding at least one of a game level being viewed, a game object being viewed, a camera being used for viewing, a game event being followed, or a reaction from a viewer.

[13] 13. Method, according to claim 9, characterized by the fact that the adjusting of the cinematography includes invoking an automatic cinematography feature of a cinematography module, the automatic cinematography feature including automatically configuring a composition of a shot during a delay inserted into a broadcast of the video session and automatically cutting to the shot before the game event occurs during the presentation.
[14] 14. Method, according to claim 9, characterized by the fact that the video session is an eSport video session and the operations further include: receiving game data collected from a plurality of player devices, the game data representing a plurality of actions performed by a plurality of players associated with the plurality of player devices, each of the plurality of actions being an action performed by the plurality of players that changes an aspect of a game environment being played in the video session; and synchronizing the game data with the game metadata for transmission to a game server.

[15] 15. Non-transitory computer-readable medium, characterized by storing processor-executable instructions that, when executed by a processor, cause the processor to perform operations to automatically generate a customized version of a video session for presentation on at least one display device, the operations comprising: receiving game metadata generated from a live or recorded video feed, the game metadata including three-dimensional modeling data associated with the live or recorded video feed; receiving viewer metadata collected from a plurality of viewing devices, the viewer metadata including information regarding a plurality of responses from a plurality of viewers to a presentation of the video session on the plurality of viewing devices; determining preferred cinematography of the plurality of viewers in relation to events that occur within the game based on the viewer metadata; automatically creating additional game metadata based on the viewer metadata, the additional game metadata including camera data associated with at least one of a new camera angle or a different camera angle to reflect the preferred cinematography; and integrating the additional game metadata into the game metadata to adjust, at least in near real time, the cinematography of the events for presentation on the at least one display device.

[16] 16. Non-transitory computer-readable medium, according to claim 15, characterized by the fact that the creation of the additional game metadata is performed by a user using camera control tools, the camera control tools allowing the user to control cameras and create new cameras within an environment modeled by the three-dimensional modeling data.

[17] 17. Non-transitory computer-readable medium, according to claim 16, characterized by the fact that the operations further include presenting information regarding the game metadata in a first region of a user interface and presenting information regarding the viewer metadata in a second region of the user interface.

[18] 18. Non-transitory computer-readable medium, according to claim 15, characterized by the fact that the plurality of responses includes information regarding at least one of a game level being viewed, a game object being viewed, a camera being used for viewing, a game event being followed, or a viewer reaction.

[19] 19. Non-transitory computer-readable medium, according to claim 15, characterized by the fact that the adjusting of the cinematography includes invoking an automatic cinematography feature of a cinematography module, the automatic cinematography feature including automatically configuring a composition of a shot during a delay inserted into a broadcast of the video session and automatically cutting to the shot before the game event occurs during the presentation.
[20] 20. Non-transitory computer-readable medium, according to claim 15, characterized by the fact that the video session is an eSport video session and the operations further include: receiving game data collected from a plurality of player devices, the game data representing a plurality of actions performed by a plurality of players associated with the plurality of player devices, each of the plurality of actions being an action performed by the plurality of players that changes an aspect of a game environment being played in the video session; and synchronizing the game data with the game metadata for transmission to a game server.
Family patents:
Publication number | Publication date
CA3046417C | 2021-09-14
RU2719454C1 | 2020-04-17
EP3551304A1 | 2019-10-16
CN110121379A | 2019-08-13
CA3046417A1 | 2018-06-14
US11266915B2 | 2022-03-08
US20200206640A1 | 2020-07-02
WO2018104791A1 | 2018-06-14
JP2020515316A | 2020-05-28
US20180161682A1 | 2018-06-14
JP6746801B2 | 2020-08-26
US10695680B2 | 2020-06-30
WO2018104791A9 | 2018-11-22
Legal status:
2021-10-13 | B350 | Update of information on the portal [chapter 15.35 patent gazette]
2022-02-22 | B06W | Patent application suspended after preliminary examination (for patents with searches from other patent authorities) [chapter 6.23 patent gazette]
Priority:
Application number | Filing date | Patent title
US201662432321P | 2016-12-09 | 2016-12-09
US201762551130P | 2017-08-28 | 2017-08-28
PCT/IB2017/001645 | WO2018104791A1 | 2016-12-09 | 2017-12-09 | Creating, broadcasting, and viewing 3d content